Optimization of Message Passing Libraries – Two Examples

Authors

  • Frank Mietke
  • Rene Grabner
  • Torsten Mehlan
Abstract

The computation of large problems in scientific and industrial fields demands efficient computer systems. A rising number of parallel computers are used to deliver the necessary computation resources. The physical restrictions of circuit integration limit the speed of single-processor solutions. The use of consumer components is a cost-effective way to build a parallel computer. The deployment of a high-speed network enables most parallel applications to run fast. Optimized communication libraries are important to enable the use of high-speed networks with parallel applications. This field is subject to active research and development.

1 An Optimized Library on Top of SISCI

Parallel computations require information exchange to compute meaningful results. The quality of the network may have a significant impact on the execution speed of some applications. The Scalable Coherent Interface (SCI) was designed to satisfy demands for high bandwidth and low latency. Applications running on clusters of workstations may benefit from SCI interconnects. As of today, a single SCI link can transmit up to 320 MByte per second [15]. Network latency is also much lower than that of conventional network links. The latency for transmitting a 4-byte message is about 1.4 μs. Compared with conventional Ethernet communication over TCP/IP, these values are really impressive. The small delay of network transmissions is considered to be the highlight of SCI technology. Even the upcoming InfiniBand technology has a latency larger than 4 μs [16]. Depending on the behaviour of the application, the use of SCI networks can lead to a significant speedup.

1.1 General Overview

Many cluster computers are equipped with an SCI network. The native interface to the SCI services is known as the Software Infrastructure for SCI (SISCI). This specification defines a programming interface that follows the Distributed Shared Memory (DSM) paradigm.
The programmer operates on memory areas that are shared by several processes. The participating processes may run on different nodes anywhere inside the SCI cluster. In contrast to the SISCI interface, the message passing paradigm makes inter-process communication explicit. Data transfer is handled by executing dedicated send and receive functions. This paradigm is widely used, and many distributed applications rely on the presence of message passing libraries. To enable the use of such applications with SCI, the VIA2SISCI library was developed. The aim was to implement a message passing library that uses SISCI functions to provide fast data transfer. The Virtual Interface Architecture (VIA) contains the definition of a message passing interface.


Similar resources

A Performance and Productivity Study using MPI, Titanium, and Fortress

The popularity of cluster computing has increased focus on usability, especially in the area of programmability. Languages and libraries that require explicit message passing have been the standard. New languages, designed for cluster computing, are coming to the forefront as a way to simplify parallel programming. Titanium and Fortress are examples of this new class of programming paradigms. T...


Disciplined Message Passing

This paper addresses the question of whether message passing provides an adequate programming model to address current needs in programming multicore processors. It studies the pitfalls of message passing as a concurrency model, and argues that programmers need more structure than what is provided by today’s popular message passing libraries. Collective operations and design patterns can help a...


MPI++: Issues and Features

The draft of the MPI (Message-Passing Interface) standard was released at Supercomputing '93, November 1993. The final MPI document is expected to be released in mid-April of 1994. Language bindings for C and FORTRAN were included in the draft; however, a language binding for C++ was not included. MPI provides support for datatypes and topologies that is needed for developing numerical libraries, ...


A Case Study in Exploiting Layers to Optimize Scientific Software

This paper presents a case study in improving the performance of layered scientific software using library-level optimization. We augment our previous work to include a notion of layers, and apply the technique to the three layers that make up the PLAPACK parallel linear algebra library—a global application level, an internal layer, and an MPI message passing layer. We show how significant perf...


Broadway: A Software Architecture for Scientific Computing

Scientific programs rely heavily on software libraries. This paper describes the limitations of this reliance and shows how it degrades software quality. We offer a solution that uses a compiler to automatically optimize library implementations and the application programs that use them. Using examples and experiments with the PLAPACK parallel linear algebra library and the MPI message passing ...


Libraries on IBM SP-2

IBM SP-2 has become a popular MPP for the scientific community. Its programming environment includes several message passing libraries. In this paper, we analyse the performance of communication operations in three libraries installed on SP-2, namely the Parallel Virtual Machine (PVM), Message Passing Library (MPL), and Message Passing Interface (MPI).




Publication date: 2003